# kopipasta
[PyPI](https://pypi.python.org/pypi/kopipasta)
[Downloads](http://pepy.tech/project/kopipasta)
A CLI tool for taking **full, transparent control** of your LLM context. No black boxes.
<img src="kopipasta.jpg" alt="kopipasta" width="300">
- An LLM told me that "kopi" means Coffee in some languages... and a Diffusion model then made this delicious soup.
## The Philosophy: You Control the Context
Many AI coding assistants use Retrieval-Augmented Generation (RAG) to automatically find what *they think* is relevant context. This is a black box. When the LLM gives a bad answer, you can't debug it because you don't know what context it was actually given.
**`kopipasta` is the opposite.** I built it for myself on the principle of **explicit context control**. You are in the driver's seat. You decide *exactly* what files, functions, and snippets go into the prompt. This transparency is the key to getting reliable, debuggable results from an LLM.
It's a "smart copy" command for your project, not a magic wand.
## How It Works
The workflow is dead simple:
1. **Gather:** Run `kopipasta` and point it at the files, directories, and URLs that matter for your task.
2. **Select:** The tool interactively helps you choose what to include. For large files, you can send just a snippet or even hand-pick individual functions.
3. **Define:** Your default editor (`$EDITOR`) opens for you to write your instructions to the LLM.
4. **Paste:** The final, comprehensive prompt is now on your clipboard, ready to be pasted into ChatGPT, Gemini, Claude, or your LLM of choice.
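A hypothetical first run might look like this (the file names and the URL are illustrative, not from a real project):

```bash
# Step 1 (Gather): point kopipasta at whatever matters for the task --
# here a hypothetical source file, a test directory, and a docs URL.
kopipasta src/payment.py tests/ https://stripe.com/docs/api

# Steps 2-4 happen interactively: choose what to include, write your task
# in $EDITOR when it opens, then paste the finished prompt from your clipboard.
```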
## Installation
```bash
# Using pipx (recommended for CLI tools)
pipx install kopipasta
# Or using standard pip
pip install kopipasta
```
## Usage
```bash
kopipasta [options] [files_or_directories_or_urls...]
```
**Arguments:**
* `[files_or_directories_or_urls...]`: One or more paths to files, directories, or web URLs to use as the starting point for your context.
**Options:**
* `-t TASK`, `--task TASK`: Provide the task description directly on the command line, skipping the editor.
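For example, a single invocation can mix files, directories, and URLs, and `-t` skips the editor step entirely. The paths below are hypothetical:

```bash
# Hypothetical example: one file, one directory, and one documentation URL,
# with the task given inline via -t so no editor opens.
kopipasta -t "Add retry logic to the API client" \
  src/api_client.py tests/ https://requests.readthedocs.io/en/latest/
```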
## Key Features
* **Total Context Control:** Interactively select files, directories, snippets, or even individual functions. You see everything that goes into the prompt.
* **Transparent & Explicit:** No hidden RAG. You know exactly what's in the prompt because you built it. This makes debugging LLM failures possible.
* **Web-Aware:** Pulls in content directly from URLs—perfect for API documentation.
* **Safety First:**
* Automatically respects your `.gitignore` rules.
* Detects if you're about to include secrets from a `.env` file and asks what to do.
* **Context-Aware:** Keeps a running total of the prompt size (in characters and estimated tokens) so you don't overload the LLM's context window (a rough token heuristic is sketched after this list).
* **Developer-Friendly:**
* Uses your familiar `$EDITOR` for writing task descriptions.
* Copies the final prompt directly to your clipboard.
* Provides syntax highlighting during chunk selection.
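About that token estimate: a common rule of thumb for code and English text is roughly four characters per token. This is only an approximation (and not necessarily the exact formula kopipasta uses), but it is enough to judge whether a prompt will fit a context window:

```bash
# Rough rule of thumb: ~4 characters per token. The file path is hypothetical,
# and this is an approximation, not kopipasta's internal estimator.
chars=$(wc -c < src/api_client.py)
echo "~$((chars / 4)) tokens (rough estimate)"
```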
## A Real-World Example
I had a bug where my `setup.py` didn't include all the dependencies from `requirements.txt`.
1. I ran `kopipasta -t "Update setup.py to read dependencies dynamically from requirements.txt" setup.py requirements.txt`.
2. The tool confirmed the inclusion of both files and copied the complete prompt to my clipboard.
3. I pasted the prompt into my LLM chat window.
4. I copied the LLM's suggested code back into my local `setup.py`.
5. I tested the changes and committed.
No manual file reading, no clumsy copy-pasting, just a clean, context-rich prompt that I had full control over.
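For readability, here is the command from step 1 again with line continuation, plus one way to sanity-check the suggested change afterwards (kopipasta itself never runs or edits your code; this is ordinary local verification):

```bash
# The command from step 1, wrapped for readability
kopipasta -t "Update setup.py to read dependencies dynamically from requirements.txt" \
  setup.py requirements.txt

# After applying the LLM's suggestion, exercise setup.py and check dependencies
pip install -e . && pip check
```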
## Configuration
Set your preferred command-line editor via the `EDITOR` environment variable.
```bash
export EDITOR=nvim # or vim, nano, code --wait, etc.
```