# kopipasta
[PyPI](https://pypi.python.org/pypi/kopipasta)
[Downloads](http://pepy.tech/project/kopipasta)
A CLI tool for taking **full, transparent control** of your LLM context. No black boxes.
<img src="kopipasta.jpg" alt="kopipasta" width="300">
- An LLM told me that "kopi" means coffee in some languages... and a diffusion model then made this delicious soup.
## The Philosophy: You Control the Context
Many AI coding assistants use Retrieval-Augmented Generation (RAG) to automatically find what *they think* is relevant context. This is a black box. When the LLM gives a bad answer, you can't debug it because you don't know what context it was actually given.
**`kopipasta` is the opposite.** I built it for myself on the principle of **explicit context control**. You are in the driver's seat. You decide *exactly* what files, functions, and snippets go into the prompt. This transparency is the key to getting reliable, debuggable results from an LLM.
It's a "smart copy" command for your project, not a magic wand.
## How It Works
The workflow is dead simple:
1. **Gather:** Run `kopipasta` and point it at the files, directories, and URLs that matter for your task (see the example after this list).
2. **Select:** The tool interactively helps you choose what to include. For large files, you can send just a snippet or even hand-pick individual functions.
3. **Define:** Your default editor (`$EDITOR`) opens for you to write your instructions to the LLM.
4. **Paste:** The final, comprehensive prompt is now on your clipboard, ready to be pasted into ChatGPT, Gemini, Claude, or your LLM of choice.
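For example, gathering context for an API change might look like the sketch below. The paths and URL are placeholders for your own project, not files that ship with kopipasta.

```bash
# Hypothetical gather step: two local paths plus a documentation URL (all placeholders).
kopipasta src/api/ tests/test_client.py https://example.com/payments-api-docs
```

From there the tool walks you through selection, opens your editor for the task description, and leaves the assembled prompt on your clipboard.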
## Installation
```bash
# Using pipx (recommended for CLI tools)
pipx install kopipasta
# Or using standard pip
pip install kopipasta
```
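Once installed, a quick sanity check confirms the command is on your `PATH`. The `--help` flag is an assumption here; most command-line tools expose it, so fall back to the tool's own output if it differs.

```bash
# Sanity check after install: confirm the executable resolves and print usage
# (assuming a conventional --help flag).
which kopipasta
kopipasta --help
```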
## Usage
```bash
kopipasta [options] [files_or_directories_or_urls...]
```
**Arguments:**
* `[files_or_directories_or_urls...]`: One or more paths to files, directories, or web URLs to use as the starting point for your context.
**Options:**
* `-t TASK`, `--task TASK`: Provide the task description directly on the command line, skipping the editor.
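Putting both together, a one-shot invocation that skips the editor might look like the following. The file names are illustrative, not part of this repository.

```bash
# Pass the task inline with -t so $EDITOR is never opened; paths are placeholders.
kopipasta -t "Add retry logic to the HTTP client" src/http_client.py src/retry_utils.py
```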
## Key Features
* **Total Context Control:** Interactively select files, directories, snippets, or even individual functions. You see everything that goes into the prompt.
* **Transparent & Explicit:** No hidden RAG. You know exactly what's in the prompt because you built it. This makes debugging LLM failures possible.
* **Web-Aware:** Pulls in content directly from URLs—perfect for API documentation.
* **Safety First:**
* Automatically respects your `.gitignore` rules.
* Detects if you're about to include secrets from a `.env` file and asks what to do.
* **Context-Aware:** Keeps a running total of the prompt size (in characters and estimated tokens) so you don't overload the LLM's context window.
* **Developer-Friendly:**
* Uses your familiar `$EDITOR` for writing task descriptions.
    * Copies the final prompt directly to your clipboard (see the snippet after this list for a quick way to inspect it).
* Provides syntax highlighting during chunk selection.
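If you want to double-check what ended up on the clipboard before pasting, you can pipe it back through standard tools. This sketch assumes macOS's `pbpaste`; on Linux, `xclip -selection clipboard -o` or `wl-paste` are common equivalents.

```bash
# Inspect the generated prompt: preview the first lines, then count characters.
pbpaste | head -n 20
pbpaste | wc -c
```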
## A Real-World Example
I had a bug where my `setup.py` didn't include all the dependencies from `requirements.txt`.
1. I ran `kopipasta -t "Update setup.py to read dependencies dynamically from requirements.txt" setup.py requirements.txt`.
2. The tool confirmed the inclusion of both files and copied the complete prompt to my clipboard.
3. I pasted the prompt into my LLM chat window.
4. I copied the LLM's suggested code back into my local `setup.py`.
5. I tested the changes and committed.
No manual file reading, no clumsy copy-pasting, just a clean, context-rich prompt that I had full control over.
## Configuration
Set your preferred command-line editor via the `EDITOR` environment variable.
```bash
export EDITOR=nvim # or vim, nano, code --wait, etc.
```