# docpilot
> AI-powered documentation autopilot for Python projects
**docpilot** automatically generates professional, comprehensive docstrings for your Python code using AI. Say goodbye to manual documentation and hello to intelligent, context-aware docstrings that follow your preferred style guide.
## Features
- **AI-Powered Generation**: Leverage GPT-4, Claude, or local LLMs (Ollama) to generate intelligent, context-aware docstrings
- **Multiple Docstring Styles**: Full support for Google, NumPy, and Sphinx docstring formats
- **Smart Code Analysis**: Understands your codebase structure, type hints, and complexity metrics
- **Production-Ready CLI**: Beautiful terminal UI with progress tracking and detailed reporting
- **Flexible Configuration**: Configure via TOML files, environment variables, or CLI arguments
- **Zero-Cost Option**: Use local LLMs (Ollama) for completely free operation
- **Batch Processing**: Process entire codebases with intelligent file discovery
- **Safe by Default**: Dry-run mode and diff preview before making changes
## Installation
### Basic Installation
```bash
pip install docpilot
```
### With Cloud LLM Support
```bash
# OpenAI and Anthropic support
pip install "docpilot[llm]"
```
### With Local LLM Support
```bash
# Ollama support for local, free LLM inference
pip install "docpilot[local]"
```
### Full Installation
```bash
# Everything including development tools
pip install "docpilot[all]"
```
## Quick Start
### 1. Initialize Configuration
```bash
docpilot init
```
This creates a `docpilot.toml` configuration file with sensible defaults.
### 2. Set Up Your LLM Provider
#### Option A: OpenAI
```bash
export OPENAI_API_KEY="your-api-key-here"
```
#### Option B: Anthropic (Claude)
```bash
export ANTHROPIC_API_KEY="your-api-key-here"
```
#### Option C: Local LLM (Free)
```bash
# Install Ollama from https://ollama.ai
ollama pull llama2
```
### 3. Generate Docstrings
```bash
# Generate for a single file
docpilot generate mymodule.py

# Generate for entire project
docpilot generate ./src --style google

# Preview changes without modifying files
docpilot generate ./src --dry-run --diff
```
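Under the hood, a `--diff` preview is just a unified diff of a file's source before and after edits. As an illustration only (this sketch is not docpilot's actual implementation; `preview_diff` is a hypothetical helper), Python's standard `difflib` can produce the same kind of preview:

```python
import difflib

def preview_diff(original: str, updated: str, path: str) -> str:
    """Render a unified diff between the original and updated source."""
    return "".join(
        difflib.unified_diff(
            original.splitlines(keepends=True),
            updated.splitlines(keepends=True),
            fromfile=f"a/{path}",
            tofile=f"b/{path}",
        )
    )

before = "def add(a, b):\n    return a + b\n"
after = 'def add(a, b):\n    """Return the sum of a and b."""\n    return a + b\n'
print(preview_diff(before, after, "mymodule.py"))
```

Only lines that changed appear in the output, prefixed with `+` or `-`, so inserting a docstring shows up as a single added line.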
## Usage Examples
### Analyze Code Structure
Examine your code without generating documentation:
```bash
docpilot analyze ./src --show-complexity --show-patterns
```
**Output:**
```
Analyzing ./src/myproject/utils.py...
✓ Found 15 elements (12 public, 3 private)

Functions:
├── calculate_total (line 10) - complexity: 3
├── validate_input (line 25) - complexity: 5
└── process_data (line 45) - complexity: 8
```
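The complexity figures are cyclomatic-style counts of decision points. As a rough sketch of the idea (not docpilot's actual metric), branch nodes can be counted with the standard `ast` module:

```python
import ast

# Branching node types that each add one decision point.
_BRANCHES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)

def cyclomatic_complexity(source: str) -> int:
    """Rough cyclomatic complexity: 1 plus the number of branch points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, _BRANCHES) for node in ast.walk(tree))

code = """
def validate_input(x):
    if x is None:
        return False
    for item in x:
        if item < 0:
            return False
    return True
"""
print(cyclomatic_complexity(code))  # 4: one base path + two ifs + one for
```

Real tools (e.g. radon) refine this count, but the principle is the same: more branches, higher complexity.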
### Generate with Custom Configuration
```bash
docpilot generate ./src \
    --provider anthropic \
    --model claude-3-sonnet-20240229 \
    --style numpy \
    --include-private \
    --overwrite
```
### Test LLM Connection
Verify your API credentials before processing:
```bash
docpilot test-connection --provider openai
```
## Real-World Example
**Before:**
```python
def calculate_compound_interest(principal, rate, time, frequency):
    return principal * (1 + rate / frequency) ** (frequency * time)
```
**After (Google Style):**
```python
def calculate_compound_interest(principal, rate, time, frequency):
    """Calculate compound interest for a given principal amount.

    This function computes the future value of an investment using the
    compound interest formula: A = P(1 + r/n)^(nt), where interest is
    compounded at regular intervals.

    Args:
        principal (float): The initial investment amount in dollars.
        rate (float): Annual interest rate as a decimal (e.g., 0.05 for 5%).
        time (float): Investment duration in years.
        frequency (int): Number of times interest is compounded per year
            (e.g., 12 for monthly, 4 for quarterly).

    Returns:
        float: The total amount after interest, including the principal.

    Examples:
        >>> calculate_compound_interest(1000, 0.05, 10, 12)
        1647.01
        >>> calculate_compound_interest(5000, 0.03, 5, 4)
        5805.92
    """
    return principal * (1 + rate / frequency) ** (frequency * time)
```
## Configuration
### Configuration File
Create `docpilot.toml` in your project root:
```toml
[docpilot]
# Docstring style: google, numpy, sphinx, or auto
style = "google"

# Overwrite existing docstrings
overwrite = false

# Include private elements (with leading underscore)
include_private = false

# Code analysis options
analyze_code = true
calculate_complexity = true
infer_types = true
detect_patterns = true

# Generation options
include_examples = true
max_line_length = 88

# File patterns
file_pattern = "**/*.py"
exclude_patterns = [
    "**/test_*.py",
    "**/*_test.py",
    "**/tests/**",
    "**/__pycache__/**",
]

# LLM settings
llm_provider = "openai"
llm_model = "gpt-3.5-turbo"
llm_temperature = 0.7
llm_max_tokens = 2000
llm_timeout = 30

# Project context (helps generate better docs)
project_name = "My Awesome Project"
project_description = "A Python library for awesome things"
```
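The `file_pattern` and `exclude_patterns` settings are ordinary glob patterns. A minimal sketch of how exclusion filtering could work (the `filter_files` helper is illustrative, not part of docpilot's API):

```python
from fnmatch import fnmatch

def filter_files(paths, exclude_patterns):
    """Keep paths that match none of the exclude glob patterns."""
    return [p for p in paths if not any(fnmatch(p, pat) for pat in exclude_patterns)]

excludes = ["**/test_*.py", "**/*_test.py", "**/tests/**", "**/__pycache__/**"]
candidates = [
    "src/app/utils.py",
    "src/app/test_utils.py",
    "src/app/tests/conftest.py",
    "src/app/__pycache__/utils.cpython-311.pyc",
]
print(filter_files(candidates, excludes))  # only src/app/utils.py survives
```

Note that `fnmatch` lets `*` cross path separators, which is why patterns like `**/tests/**` exclude nested test directories.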
### Environment Variables
All configuration can be set via environment variables with the `DOCPILOT_` prefix:
```bash
export DOCPILOT_STYLE="numpy"
export DOCPILOT_LLM_PROVIDER="anthropic"
export DOCPILOT_LLM_MODEL="claude-3-haiku-20240307"
export DOCPILOT_OVERWRITE="true"
```
### CLI Arguments
CLI arguments override both file and environment configuration:
```bash
docpilot generate ./src \
    --style sphinx \
    --provider local \
    --model llama2 \
    --overwrite
```
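The precedence order can be pictured as layered dictionaries where later layers win. A minimal sketch, with hypothetical dictionaries standing in for the three sources (this is not docpilot's internal code):

```python
def resolve_config(file_cfg, env_cfg, cli_cfg):
    """Merge config layers; later layers win (CLI > env > file)."""
    merged = {}
    for layer in (file_cfg, env_cfg, cli_cfg):
        # Skip unset values so a missing CLI flag doesn't erase lower layers.
        merged.update({k: v for k, v in layer.items() if v is not None})
    return merged

file_cfg = {"style": "google", "llm_provider": "openai"}   # docpilot.toml
env_cfg = {"style": "numpy"}                               # DOCPILOT_STYLE
cli_cfg = {"style": "sphinx", "llm_provider": None}        # --style sphinx
print(resolve_config(file_cfg, env_cfg, cli_cfg))
# → {'style': 'sphinx', 'llm_provider': 'openai'}
```

The key detail is filtering out unset (`None`) values: an option omitted on the CLI falls through to the environment, then to the file.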
## Supported Docstring Styles
### Google Style (Default)
```python
def example(arg1, arg2):
    """Short description.

    Longer description if needed.

    Args:
        arg1 (int): Description of arg1.
        arg2 (str): Description of arg2.

    Returns:
        bool: Description of return value.

    Raises:
        ValueError: When validation fails.
    """
```
### NumPy Style
```python
def example(arg1, arg2):
    """Short description.

    Longer description if needed.

    Parameters
    ----------
    arg1 : int
        Description of arg1.
    arg2 : str
        Description of arg2.

    Returns
    -------
    bool
        Description of return value.

    Raises
    ------
    ValueError
        When validation fails.
    """
```
### Sphinx Style
```python
def example(arg1, arg2):
    """Short description.

    Longer description if needed.

    :param arg1: Description of arg1.
    :type arg1: int
    :param arg2: Description of arg2.
    :type arg2: str
    :return: Description of return value.
    :rtype: bool
    :raises ValueError: When validation fails.
    """
```
## LLM Providers
### OpenAI
Best for: High-quality, consistent docstrings
```bash
# Supported models
docpilot generate ./src --provider openai --model gpt-4
docpilot generate ./src --provider openai --model gpt-3.5-turbo
```
### Anthropic (Claude)
Best for: Detailed explanations and complex code
```bash
# Supported models
docpilot generate ./src --provider anthropic --model claude-3-opus-20240229
docpilot generate ./src --provider anthropic --model claude-3-sonnet-20240229
docpilot generate ./src --provider anthropic --model claude-3-haiku-20240307
```
### Local (Ollama)
Best for: Privacy, cost-free operation, offline work
```bash
# First, pull a model
ollama pull llama2
ollama pull codellama
ollama pull mistral

# Then use it
docpilot generate ./src --provider local --model llama2
```
## Requirements
- **Python**: 3.9 or higher
- **Operating System**: Linux, macOS, Windows
- **Optional**: OpenAI API key, Anthropic API key, or Ollama installation
## Advanced Usage
### Using in CI/CD
```yaml
# .github/workflows/docs.yml
name: Generate Documentation

on: [push, pull_request]

jobs:
  generate-docs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.11'

      - name: Install docpilot
        run: pip install "docpilot[llm]"

      - name: Generate docstrings
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
        run: |
          docpilot generate ./src --dry-run
```
### Integration with pre-commit
```yaml
# .pre-commit-config.yaml
repos:
  - repo: local
    hooks:
      - id: docpilot
        name: Generate docstrings
        entry: docpilot generate --dry-run
        language: system
        types: [python]
```
### Programmatic Usage
```python
from docpilot.core.generator import DocstringGenerator
from docpilot.llm.base import create_provider, LLMConfig, LLMProvider
from docpilot.core.models import DocstringStyle

# Configure LLM
config = LLMConfig(
    provider=LLMProvider.OPENAI,
    model="gpt-3.5-turbo",
    api_key="your-api-key"
)

# Create generator
llm = create_provider(config)
generator = DocstringGenerator(llm_provider=llm)

# Generate docstrings
import asyncio

async def generate():
    results = await generator.generate_for_file(
        file_path="mymodule.py",
        style=DocstringStyle.GOOGLE,
        include_private=False,
        overwrite_existing=False
    )

    for doc in results:
        print(f"{doc.element_name}: {doc.docstring}")

asyncio.run(generate())
```
## Performance
docpilot is designed for production use with large codebases:
- **Parallel Processing**: Processes multiple files concurrently
- **Smart Caching**: Avoids redundant LLM calls
- **Rate Limiting**: Respects API rate limits automatically
- **Incremental Updates**: Only processes changed files
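Incremental updates of this kind are commonly implemented by fingerprinting file contents and skipping files whose hash is unchanged. A sketch under that assumption (the cache format shown is illustrative, not docpilot's actual cache):

```python
import hashlib

def file_fingerprint(source: bytes) -> str:
    """Content hash used to decide whether a file needs reprocessing."""
    return hashlib.sha256(source).hexdigest()

def needs_update(source: bytes, cache: dict, path: str) -> bool:
    """True if the file changed since the cached fingerprint was stored."""
    digest = file_fingerprint(source)
    if cache.get(path) == digest:
        return False        # unchanged: skip the LLM call
    cache[path] = digest    # record the new fingerprint
    return True

cache = {}
print(needs_update(b"def f(): pass", cache, "a.py"))  # True (first run)
print(needs_update(b"def f(): pass", cache, "a.py"))  # False (cached)
```

Because the hash depends only on file contents, renaming or touching a file without editing it would still hit the cache if keyed by content rather than path.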
**Typical Performance:**
- Small project (50 functions): ~2-3 minutes with OpenAI
- Medium project (500 functions): ~20-30 minutes with OpenAI
- Large project (5000 functions): ~3-4 hours with OpenAI
- Local LLM: 2-3x slower but free
## Troubleshooting
### API Key Issues
```bash
# Verify API key is set
echo $OPENAI_API_KEY

# Test connection
docpilot test-connection --provider openai
```
### Rate Limiting
docpilot automatically handles rate limits; you can also reduce pressure on the API by tuning timeouts and token limits:
```toml
[docpilot]
llm_timeout = 60 # Increase timeout
llm_max_tokens = 1000 # Reduce token usage
```
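Automatic rate-limit handling generally amounts to retrying with exponential backoff. A generic sketch of the pattern (the exception type and helper names here are illustrative, not docpilot's internals):

```python
import time

def call_with_backoff(call, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Retry a rate-limited call, doubling the delay after each failure."""
    for attempt in range(max_retries):
        try:
            return call()
        except RuntimeError:              # stand-in for a rate-limit error
            if attempt == max_retries - 1:
                raise                     # out of retries: surface the error
            sleep(base_delay * 2 ** attempt)

attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("rate limited")
    return "ok"

print(call_with_backoff(flaky, sleep=lambda s: None))  # "ok" after 2 retries
```

The injectable `sleep` parameter makes the backoff schedule easy to test without actually waiting.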
### Quality Issues
If generated docstrings aren't meeting your standards:
1. **Try a better model**: Switch from `gpt-3.5-turbo` to `gpt-4`
2. **Provide context**: Set `project_name` and `project_description` in config
3. **Lower temperature**: Reduce `llm_temperature` to 0.3 for more focused output
4. **Add custom instructions**: Use `custom_instructions` in config
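Assuming `custom_instructions` is a string key in the same `[docpilot]` table as the other settings (verify against your installed version's documentation), a tuned configuration combining tips 3 and 4 might look like:

```toml
[docpilot]
# Lower temperature for more focused, deterministic output
llm_temperature = 0.3

# Hypothetical example of steering the generated style
custom_instructions = "Prefer concise one-line summaries; always document raised exceptions."
```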
## Contributing
We welcome contributions! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
### Development Setup
```bash
# Clone repository
git clone https://github.com/yourusername/docpilot.git
cd docpilot

# Create virtual environment
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install in development mode
pip install -e ".[dev]"

# Run tests
pytest

# Run type checking
mypy src/docpilot

# Run linting
ruff check src/
black --check src/
```
## Roadmap
- [ ] Support for JavaScript/TypeScript
- [ ] Visual Studio Code extension
- [ ] Documentation website generation
- [ ] Custom LLM prompt templates
- [ ] Docstring quality scoring
- [ ] Automated documentation updates on code changes
- [ ] Integration with Sphinx/MkDocs
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Acknowledgments
- Built with [Click](https://click.palletsprojects.com/) for CLI
- Powered by [OpenAI](https://openai.com/), [Anthropic](https://www.anthropic.com/), and [Ollama](https://ollama.ai/)
- Terminal UI by [Rich](https://rich.readthedocs.io/)
## Support
- **Issues**: [GitHub Issues](https://github.com/yourusername/docpilot/issues)
- **Discussions**: [GitHub Discussions](https://github.com/yourusername/docpilot/discussions)
- **Documentation**: [Read the Docs](https://docpilot.readthedocs.io)
---
If docpilot saves you time, please consider giving it a star on GitHub!