llm-devtale

Name: llm-devtale
Version: 0.2.3
Summary: LLM plugin to create a hierarchical summary description of a project
Author: Javier Vela
Requires Python: >=3.12
License: Apache-2.0
Repository: https://github.com/jvdiago/llm_devtale
Uploaded: 2025-07-23 09:41:58
# llm-devtale

`llm-devtale` is a plugin for [Simon Willison's `llm`](https://github.com/simonw/llm) command-line tool that automatically generates documentation ("dev tales") for your source code projects. It analyzes your project's files and folders, taking into account factors such as file size, git commit activity, and exclusion patterns, to produce a hierarchical, LLM-generated summary of your codebase.

The generated documentation includes:
*   A high-level overview of the entire repository.
*   Summaries for each analyzed folder.
*   Detailed "dev tales" for individual source code files.
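
These layers compose bottom-up: file tales feed the folder summaries, which in turn feed the repository overview. A minimal sketch of that data flow (illustrative names only; `summarize` stands in for an LLM call and none of this is the plugin's actual API):
```python
def build_dev_tale(folders, summarize):
    """Compose summaries bottom-up: files -> folders -> repository."""
    folder_summaries = []
    for folder, sources in folders.items():
        # One "dev tale" per source file.
        file_tales = [summarize(f"Describe this file:\n{src}") for src in sources]
        # Each folder summary is built from its file tales.
        folder_summaries.append(
            summarize(f"Summarize folder {folder}:\n" + "\n".join(file_tales))
        )
    # The repository overview is built from the folder summaries.
    return summarize("Give a high-level overview:\n" + "\n".join(folder_summaries))
```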

To avoid spending tokens on boilerplate or unimportant files, the program expects the directory to be a git repository: it orders files by number of commits so that the most important files (those with the most commits) are analyzed first, within the configured token limits.
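
The ranking idea can be sketched in a few lines of Python that count how many commits touched each file (a minimal illustration assuming a `git` binary on the PATH; `rank_files_by_commits` is a hypothetical helper, not the plugin's API):
```python
import subprocess
from collections import Counter

def rank_files_by_commits(repo_dir="."):
    # "git log --name-only" lists the files touched by each commit;
    # counting occurrences ranks files by commit activity.
    out = subprocess.run(
        ["git", "log", "--name-only", "--pretty=format:"],
        cwd=repo_dir, capture_output=True, text=True, check=True,
    ).stdout
    counts = Counter(line for line in out.splitlines() if line.strip())
    return [path for path, _ in counts.most_common()]
```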

Inspiration (and some code) taken from https://github.com/tenxstudio/devtale and https://github.com/irthomasthomas/llm-cartographer.

The tool uses up to 8 threads to speed up the analysis; threading is applied only to file summarization.
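
Conceptually this is a standard thread-pool fan-out; a minimal sketch using Python's `concurrent.futures` (names here are illustrative, not the plugin's actual internals):
```python
from concurrent.futures import ThreadPoolExecutor

def summarize_files(files, summarize_one, max_workers=8):
    # Fan file summarization out across up to 8 worker threads; folder and
    # repository summaries depend on these results, so they run afterwards.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(summarize_one, files))
```
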
## Installation

First, ensure you have `llm` installed:
```bash
uv tool install llm
```

Then, install the `llm-devtale` plugin:
```bash
llm install llm-devtale
```

## Usage

Once installed, the `devtale` command will be available through the `llm` CLI:

```bash
llm devtale [DIRECTORY] [OPTIONS]
```

By default, `DIRECTORY` is the current working directory (`.`).

## Examples

### Generate documentation for the current directory

This will output the generated documentation to your console.
```bash
llm devtale .
```

### Output documentation to a file

Save the generated documentation to `PROJECT_README.md`.
```bash
llm devtale . -o PROJECT_README.md
```

### Use a specific LLM model

Generate documentation using GPT-4 (via the `gpt4` model alias):
```bash
llm devtale . -m gpt4
```

### Exclude specific files or folders

Exclude all files under `test/` directories and `docs/` folders using gitignore-style patterns:
```bash
llm devtale . -e "**/test/*" -e "docs/"
```
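
The patterns follow gitignore semantics; one way to check what a pattern will match is the `pathspec` library (used here purely for illustration; this is an assumption, not necessarily what the plugin uses internally):
```python
import pathspec

spec = pathspec.PathSpec.from_lines("gitwildmatch", ["**/test/*", "docs/"])
print(spec.match_file("src/test/helpers.py"))  # True: directly under a test/ dir
print(spec.match_file("docs/index.md"))        # True: inside docs/
print(spec.match_file("src/app/main.py"))      # False: not excluded
```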

### Filter by file extension

Only include Python (`.py`) and JavaScript (`.js`) files in the analysis. Do not forget the `*` before the extension, and quote the patterns so your shell does not expand them:
```bash
llm devtale . -f "*.py" -f "*.js"
```

### Filter by directory

Only include the folders `src/app` and `src/utils`:
```bash
llm devtale . -k src/app -k src/utils
```

### Limit token usage

Specify the maximum number of tokens to send to the LLM for the entire project and per file:
```bash
llm devtale . --max-tokens 50000 --max-tokens-per-file 5000
```
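
Token counting depends on the model's tokenizer; a rough way to estimate a file's token cost before choosing limits is the `tiktoken` library (an approximation chosen for illustration, not necessarily the plugin's own counter):
```python
import tiktoken

def estimate_tokens(text, encoding_name="cl100k_base"):
    # Rough estimate; actual counts vary by model tokenizer.
    return len(tiktoken.get_encoding(encoding_name).encode(text))

with open("src/app/main.py") as f:  # hypothetical source file
    print(estimate_tokens(f.read()))
```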

### Perform a dry run

See which files and folders would be analyzed without actually calling the LLM. This shows the project hierarchy and token counts.
```bash
llm devtale . --dry-run
```

### Add additional instructions to the prompt

Add additional instructions to the end of all LLM prompts.
```bash
llm devtale -p "All summaries should be in uppercase" .
```


## Options

*   `DIRECTORY`: Path to the project directory (default: `.`)
*   `-e, --exclude <PATTERN>`: Patterns to exclude files/folders (gitignore format). Can be used multiple times.
*   `--max-tokens <INT>`: Maximum total tokens to send to the LLM for the entire project.
*   `--max-tokens-per-file <INT>`: Maximum tokens to process per individual file.
*   `-o, --output <PATH>`: Output file path or directory to save the generated documentation.
*   `-m, --model <MODEL_NAME>`: Specify the LLM model to use (e.g., `gpt4`). If not set, the default model configured in the `llm` CLI is used.
*   `-f, --filter-extension <EXTENSION>`: Only include files with these extensions (e.g., `*.py`, `*.md`). Can be used multiple times.
*   `-t, --dry-run`: Show the hierarchy and files that would be analyzed without making LLM calls.
*   `-d, --debug`: Turn on verbose logging.
*   `-k, --filter-folder <FOLDER>`: Only parse the specified folder(s). Can be used multiple times.
*   `-p, --prompt <TEXT>`: Additional instructions appended to the end of every LLM prompt.

## Debug

For debugging, the program can be executed directly via an ad-hoc `main.py` file added for convenience:
```bash
python -m llm_devtale.main
```

            
