llm-context

Name: llm-context
Version: 0.1.0
Home page: https://github.com/cyberchitta/llm-context.py
Summary: A command-line tool for copying code context to clipboard for use in LLM chats
Upload time: 2024-09-20 15:53:33
Author: restlessronin
Requires Python: <4.0,>=3.9
License: Apache-2.0
Keywords: llm, ai, context, code, clipboard, chat
# LLM Context

[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![PyPI version](https://img.shields.io/pypi/v/llm-context.svg)](https://pypi.org/project/llm-context/)

LLM Context is a command-line tool that helps developers efficiently copy and paste relevant context from code/text repositories into the web chat interface of Large Language Models (LLMs). It leverages `.gitignore` patterns for smart file selection and uses the clipboard for seamless integration with LLM interfaces.

> **Note**: This project was developed in collaboration with Claude-3.5-Sonnet, using LLM Context itself to share code context during development. All code in the repository is human-curated (by me 😇, @restlessronin).

## Current Usage Patterns

- **LLM Integration**: Primarily used with Claude (Project Knowledge) and GPT (Knowledge), but supports all LLM chat interfaces.
- **Project Types**: Suitable for code repositories and collections of text/Markdown/HTML documents.
- **Project Size**: Optimized for projects that fit within an LLM's context window. Large project support is in development.

## Installation

Use [pipx](https://pypa.github.io/pipx/) to install LLM Context:

```shell
pipx install llm-context
```

## Usage

LLM Context enables rapid project context updates for each AI chat.

### Quick Start and Typical Workflow

1. [Install LLM Context](#installation) if you haven't already.
2. Navigate to your project's root directory.
3. Run `lc-init` to set up LLM Context for your project (only needed once per repository).
4. For chat interfaces with built-in context storage (e.g., Claude Pro Projects, ChatGPT Plus GPTs):
   - Set up your custom prompt manually in the chat interface.
   - A default prompt is available in `.llm-context/templates/lc-prompt.md`.
5. (Optional) Edit `.llm-context/config.toml` to [add custom ignore patterns](#customizing-ignore-patterns).
6. Run `lc-sel-files` to select files for full content inclusion.
7. (Optional) [Review the selected file](#reviewing-selected-files) list in `.llm-context/curr_ctx.toml`.
8. Generate and copy the context:
   - For chat interfaces with built-in storage: Run `lc-context`
   - For other interfaces (including free plans): Run `lc-context --with-prompt` to include the default prompt
9. Paste the generated context:
   - For interfaces with built-in storage: Into the Project Knowledge (Claude Pro) or GPT Knowledge (ChatGPT Plus) section
   - For other interfaces: Into the system message or the first chat message, as appropriate
10. Start your conversation with the LLM about your project.

To maintain up-to-date AI assistance:
- Repeat steps 6-9 at the start of each new chat session. This process takes only seconds.
- For interfaces with built-in storage, update your custom prompt separately if needed.
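
Condensed into commands, a typical session looks like this (a sketch; `my-project` stands in for your repository path):

```
$ cd my-project
$ lc-init          # one-time setup per repository
$ lc-sel-files     # select files for full content inclusion
$ lc-context       # generate context and copy it to the clipboard
```

For interfaces without built-in storage, substitute `lc-context --with-prompt` in the last step.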

### Handling LLM File Requests

When the LLM asks for a file that isn't in the current context:

1. Copy the LLM's file request (typically in a markdown block) to your clipboard.
2. Run `lc-read-cliplist` to generate the content of the requested files.
3. Paste the generated file contents back into your chat with the LLM.
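
For example, after copying the LLM's requested file list to your clipboard:

```
$ lc-read-cliplist   # reads the file list from the clipboard and
                     # copies the files' contents back to it
```

The clipboard now holds the file contents, ready to paste into the chat.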

### Configuration

#### Customizing Ignore Patterns

Add custom ignore patterns in `.llm-context/config.toml` to exclude specific files or directories not covered by your project's `.gitignore`. This is useful for versioned files that don't contribute to code context, such as media files, large generated files, detailed changelogs, or environment-specific configurations.

Example:

```toml
# /.llm-context/config.toml
[gitignores]
full_files = [
  "*.svg",
  "*.png",
  "CHANGELOG.md",
  ".env",
  # Add more patterns here
]
```

#### Reviewing Selected Files

Review the list of selected files in `.llm-context/curr_ctx.toml` to check what's included in the context. This is particularly useful when trying to minimize context size.

```toml
# /.llm-context/curr_ctx.toml
[context]
full = [
  "/llm-context.py/pyproject.toml",
  # more files ...
]
```

## Command Reference

- `lc-init`: Initialize LLM Context for your project (only needed once per repository)
- `lc-sel-files`: Select files for full content inclusion
- `lc-sel-outlines`: Select files for outline inclusion (experimental)
- `lc-context`: Generate and copy context to clipboard
  - Use `--with-prompt` flag to include the prompt for LLMs without built-in storage
- `lc-read-cliplist`: Read the contents of LLM-requested files and copy them to the clipboard

## Experimental: Handling Larger Repositories

For larger projects, we're exploring a combined approach of full file content and file outlines. Use `lc-sel-outlines` after `lc-sel-files` to experiment with this feature.

**Note:** The outlining feature currently supports the following programming languages:
C, C++, C#, Elisp, Elixir, Elm, Go, Java, JavaScript, OCaml, PHP, Python, QL, Ruby, Rust, and TypeScript. Files in unsupported languages will not be outlined and will be excluded from the outline selection.
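
A sketch of the combined selection flow:

```
$ lc-sel-files       # select files for full content inclusion
$ lc-sel-outlines    # add outline-only files (experimental)
$ lc-context         # generate and copy the combined context
```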

### Feedback and Contributions

We welcome feedback, issue reports, and pull requests on our [GitHub repository](https://github.com/cyberchitta/llm-context.py).

## Acknowledgments

LLM Context evolves from a lineage of AI-assisted development tools:

- This project succeeds [LLM Code Highlighter](https://github.com/restlessronin/llm-code-highlighter), a TypeScript library I developed for IDE integration.
- The concept originated from my work on [RubberDuck](https://github.com/rubberduck-ai/rubberduck-vscode) and continued with later contributions to [Continue](https://github.com/continuedev/continuedev).
- LLM Code Highlighter was heavily inspired by [Aider Chat](https://github.com/paul-gauthier/aider). I worked with GPT-4 to translate several Aider Chat Python modules into TypeScript, maintaining functionality while restructuring the code.
- This project uses tree-sitter [tag query files](src/llm_context/highlighter/tag-qry/) from Aider Chat.
- LLM Context exemplifies the power of AI-assisted development, transitioning from Python to TypeScript and back to Python with the help of GPT-4 and Claude-3.5-Sonnet.

I am grateful for the open-source community's innovations and for the AI assistance, particularly from Claude-3.5-Sonnet, that have shaped this project's evolution.

## License

This project is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for details.

            
