commitlm

Name: commitlm
Version: 1.0.8
Summary: AI-powered Git Documentation & Commit Messages
Upload time: 2025-10-14 21:39:44
Requires Python: >=3.9
Keywords: git, commit, documentation, ai, llm, automation, developer-tools
Requirements: gitpython, transformers, torch, accelerate, optimum, bitsandbytes, click, rich, InquirerPy, pydantic, python-dotenv, jinja2, google-genai, google-api-core, anthropic, openai, pytest, pytest-cov, black, ruff, watchdog
# CommitLM — AI-powered Git Documentation & Commit Messages

[![PyPI version](https://img.shields.io/pypi/v/commitlm.svg)](https://pypi.org/project/commitlm/)
[![Python Versions](https://img.shields.io/pypi/pyversions/commitlm.svg)](https://pypi.org/project/commitlm/)
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)

**Automated Documentation and Commit Message Generation for Every Git Commit**

CommitLM is an AI-native tool that automatically generates comprehensive documentation for your code changes and creates conventional commit messages. It integrates seamlessly with Git through hooks to analyze your changes and provide intelligent documentation and commit messages, streamlining your workflow and improving your project's maintainability.

## Why CommitLM?

- 🚀 **Save Time**: Eliminate manual documentation and commit message writing
- 📝 **Maintain Quality**: Consistent, professional documentation for every commit
- 🤖 **Flexible AI**: Choose from multiple LLM providers or run models locally
- ⚡ **Zero Friction**: Works automatically via Git hooks - no workflow changes needed
- 🔒 **Privacy First**: Run local models for complete data privacy
- 💰 **Cost Effective**: Free local models or affordable cloud APIs

## Table of Contents

- [Features](#features)
- [Quick Start](#quick-start)
- [System Requirements](#system-requirements)
- [Configuration](#configuration)
- [Hardware Support](#hardware-support-local-models)
- [Usage Examples](#usage-examples)
- [Commands](#commands)
- [Troubleshooting](#troubleshooting)
- [Contributing](#contributing)
- [License](#license)

## Features

### Core Capabilities
- **📝 Automatic Commit Messages**: AI-generated conventional commit messages via `prepare-commit-msg` hook
- **📚 Automatic Documentation**: Comprehensive docs generated after every commit via `post-commit` hook
- **🎯 Task-Specific Models**: Use different models for commit messages vs documentation generation
- **📁 Organized Documentation**: All docs saved in `docs/` folder with timestamps and commit hashes

### Multi-Provider Support
- **☁️ Cloud APIs**: Google Gemini, Anthropic Claude, OpenAI GPT support
- **🏠 Local Models**: HuggingFace models (Qwen2.5-Coder, Phi-3, TinyLlama) - no API keys required
- **🔄 Fallback Options**: Configure fallback to local models if API fails
- **⚙️ Flexible Configuration**: Mix and match providers for different tasks

### Performance & Optimization
- **⚡ GPU/CPU Auto-detection**: Automatically uses NVIDIA GPU, Apple Silicon, or CPU
- **💾 Memory Optimization**: Toggleable 8-bit quantization for systems with limited RAM
- **🎯 Extended Context**: YaRN support for Qwen models (up to 131K tokens)

## Quick Start

### 1. Install

```bash
pip install commitlm
```

### 2. Initialize Configuration

```bash
# Interactive setup (recommended) - guides you through provider, model, and task selection
commitlm init

# Setup with specific provider and model
commitlm init --provider gemini --model gemini-2.0-flash-exp
commitlm init --provider anthropic --model claude-3-5-haiku-latest
commitlm init --provider openai --model gpt-4o-mini
commitlm init --provider huggingface --model qwen2.5-coder-1.5b
```

#### Interactive Setup Flow

When you run `commitlm init`, you'll be guided through:

1. **Provider Selection**: Choose between local (HuggingFace) or cloud (Gemini, Anthropic, OpenAI)
2. **Model Selection**: Pick from provider-specific models
3. **Task Configuration**: Enable commit messages, documentation, or both
4. **Task-Specific Models** (optional): Use different models for different tasks
5. **Fallback Configuration**: Set up fallback to local models if API fails

Example interactive session:
```
? Select LLM provider › gemini
? Select model › gemini-2.0-flash-exp
? Which tasks do you want to enable? › both
? Do you want to use different models for specific tasks? › Yes
  ? Select provider for commit_message › huggingface
  ? Select model › qwen2.5-coder-1.5b
? Enable fallback to a local model if the API fails? › Yes
```

#### Provider Options

**Local Models (HuggingFace)** - No API keys required:
- `qwen2.5-coder-1.5b` - **Recommended** - Best balance of quality and speed, YaRN support (1.5B params)
- `phi-3-mini-128k` - Long context (128K tokens), excellent for large diffs (3.8B params)
- `tinyllama` - Minimal resource usage (1.1B params)

**Cloud APIs** - Faster, more capable:
- **Gemini**: `gemini-2.0-flash-exp`, `gemini-1.5-pro`, `gemini-1.5-flash` (requires `GEMINI_API_KEY`)
- **Anthropic**: `claude-3-5-sonnet-latest`, `claude-3-5-haiku-latest` (requires `ANTHROPIC_API_KEY`)
- **OpenAI**: `gpt-4o`, `gpt-4o-mini` (requires `OPENAI_API_KEY`)

### 3. Install Git Hooks

CommitLM provides two powerful git hooks:

```bash
# Install both hooks (recommended)
commitlm install-hook

# Install only commit message generation
commitlm install-hook message

# Install only documentation generation
commitlm install-hook docs
```

**What each hook does**:

**`prepare-commit-msg` hook** (Commit Messages):
1. Runs before commit editor opens
2. Analyzes staged changes (`git diff --cached`)
3. Generates conventional commit message
4. Pre-fills commit message in editor

**`post-commit` hook** (Documentation):
1. Runs after commit completes
2. Extracts commit diff
3. Generates comprehensive documentation
4. Saves to `docs/commit_<hash>_<timestamp>.md`
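
The `prepare-commit-msg` steps above can be sketched as a small shell hook. This is an illustrative assumption about the hook's shape, not the exact script that `commitlm install-hook` writes:

```bash
#!/bin/sh
# Illustrative prepare-commit-msg hook (assumption: the installed hook may differ).
COMMIT_MSG_FILE="$1"   # file git expects the commit message in
COMMIT_SOURCE="$2"     # non-empty when -m/-F/merge/squash already supplied one

# Only pre-fill when no message source was given, so `git commit -m` stays untouched.
if [ -n "$COMMIT_MSG_FILE" ] && [ -z "$COMMIT_SOURCE" ]; then
    commitlm generate --short-message > "$COMMIT_MSG_FILE" 2>/dev/null || true
fi
```

Git passes the message file path and the message source as the hook's first two arguments, which is why a commit made with `git commit -m "..."` bypasses generation.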

Example workflow:
```bash
# Make your code changes
git add .

# Option 1: Use hook to generate message
git commit
# Editor opens with AI-generated message pre-filled
# Edit if needed, save and close

# Option 2: Use git alias (see below)
git c  # Stages, generates message, commits in one step

# Documentation is automatically generated after commit completes
# docs/commit_abc1234_2025-01-15_14-30-25.md
```

**Example Generated Commit Message:**
```
feat(auth): add OAuth2 authentication support

Implemented OAuth2 authentication flow with support for Google and GitHub providers.
Added token refresh mechanism and secure session management.
```

**Example Generated Documentation:**
````markdown
# Commit Documentation

## Summary
Added OAuth2 authentication support with Google and GitHub providers, implementing
secure token management and session handling.

## Changes Made
- Implemented OAuth2 authentication flow
- Added GoogleAuthProvider and GitHubAuthProvider classes
- Created TokenRefreshService for automatic token renewal
- Added secure session storage with encryption

## Technical Impact
- New dependencies: oauth2-client, jose
- Database migration required for user_tokens table
- Environment variables needed: GOOGLE_CLIENT_ID, GITHUB_CLIENT_ID

## Usage Example
```python
from auth import OAuth2Manager

manager = OAuth2Manager(provider='google')
auth_url = manager.get_authorization_url()
```
````

#### Alternative: Git Alias Workflow

Set up a convenient git alias for one-command commits:

```bash
commitlm set-alias
# Creates 'git c' alias (or custom name)

# Now use it:
git add .
git c  # Automatically generates message and commits
```
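
For reference, the alias is likely similar in spirit to the following hypothetical definition (an assumption; inspect the real one with `git config --get alias.c`):

```bash
# Hypothetical stand-in for what `commitlm set-alias` configures:
# stage everything, generate a message, and commit in one step.
git config --global alias.c '!git add -A && git commit -m "$(commitlm generate --short-message)"'

# Confirm the alias was recorded:
git config --global --get alias.c
```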

### 4. Validate Setup

```bash
# View configuration and hardware info
commitlm status
```

## System Requirements

### Minimum Requirements
- Python 3.9+
- 4GB RAM (with memory optimization enabled)
- 2GB disk space (for model downloads)

### Recommended Requirements  
- Python 3.10+
- 8GB+ RAM
- NVIDIA GPU with 4GB+ VRAM (optional, auto-detected)
- SSD storage

## Configuration

### Environment Variables

Set API keys for cloud providers:

```bash
# In your shell profile (~/.bashrc, ~/.zshrc, etc.)
export GEMINI_API_KEY="your-gemini-api-key"
export ANTHROPIC_API_KEY="your-anthropic-api-key"
export OPENAI_API_KEY="your-openai-api-key"
```

**Where to get API keys:**
- **Gemini**: [Google AI Studio](https://makersuite.google.com/app/apikey) - Free tier available
- **Anthropic**: [Anthropic Console](https://console.anthropic.com/) - Pay-as-you-go pricing
- **OpenAI**: [OpenAI Platform](https://platform.openai.com/api-keys) - Pay-as-you-go pricing

### Task-Specific Models

Use different models for different tasks:

```bash
# Enable task-specific models during init
commitlm init
# Select "Yes" when prompted "Do you want to use different models for specific tasks?"

# Or configure later
commitlm enable-task

# Change model for specific task
commitlm config change-model commit_message
commitlm config change-model doc_generation
```

**Example use case**: Use fast local model (Qwen) for commit messages, powerful cloud API (Claude) for documentation.

### Configuration File

Configuration is stored in `.commitlm-config.json` at your git repository root:

```json
{
  "provider": "gemini",
  "model": "gemini-2.0-flash-exp",
  "commit_message_enabled": true,
  "doc_generation_enabled": true,
  "commit_message": {
    "provider": "huggingface",
    "model": "qwen2.5-coder-1.5b"
  },
  "doc_generation": {
    "provider": "gemini",
    "model": "gemini-1.5-pro"
  },
  "fallback_to_local": true
}
```

## Hardware Support (Local Models)

When using HuggingFace local models, the tool automatically detects and uses the best available hardware:

1. **NVIDIA GPU** (CUDA) - Uses GPU acceleration with `device_map="auto"`
2. **Apple Silicon** (MPS) - Uses Apple's Metal Performance Shaders
3. **CPU** - Falls back to optimized CPU inference
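
The detection order above can be reproduced in a few lines of PyTorch. This is a sketch of the idea, not CommitLM's actual code:

```python
def detect_device() -> str:
    """Return the best available backend: CUDA, then MPS, then CPU."""
    try:
        import torch  # optional dependency here: fall back to CPU if absent
        if torch.cuda.is_available():
            return "cuda"
        mps = getattr(torch.backends, "mps", None)
        if mps is not None and mps.is_available():
            return "mps"
    except ImportError:
        pass
    return "cpu"

print(detect_device())
```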

### Memory Optimization

Memory optimization is **enabled by default** for local models and includes:
- 8-bit quantization (reduces memory by ~50%)
- float16 precision
- Automatic model sharding
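
The ~50% figure follows directly from weight sizes: float16 stores two bytes per parameter, 8-bit quantization stores one. For a 1.5B-parameter model:

```python
params = 1.5e9                 # e.g. qwen2.5-coder-1.5b
fp16_gb = params * 2 / 1e9     # 2 bytes per weight in float16
int8_gb = params * 1 / 1e9     # 1 byte per weight after 8-bit quantization
print(f"float16: {fp16_gb:.1f} GB, int8: {int8_gb:.1f} GB")  # 3.0 GB vs 1.5 GB
```

Activation and KV-cache memory come on top of the weights, so treat these numbers as lower bounds.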

Disable for better quality (requires more RAM):
```bash
commitlm init --provider huggingface --no-memory-optimization
```

## Usage Examples

### Using Commit Message Hook

```bash
# Make changes
echo "def new_feature(): pass" >> src/app.py
git add .

# Commit without message - hook generates it
git commit
# Editor opens with pre-filled message:
# feat(app): add new feature function

# Review, edit if needed, save and close
```

### Using Git Alias

```bash
# Set up alias once
commitlm set-alias

# Use it for every commit
git add .
git c  # Generates message and commits automatically
```

### Using Documentation Hook

After installing the `post-commit` hook:

```bash
# Make changes
echo "console.log('new feature')" >> src/app.js
git add .
git commit -m "feat: add logging feature"

# Documentation automatically generated at:
# docs/commit_a1b2c3d_2025-01-15_14-30-25.md
```

### Manual Generation (Testing/Debugging)

```bash
# Test documentation generation with sample diff
commitlm generate "fix: resolve memory leak
- Fixed session cleanup
- Added event listener removal"

# Test commit message generation
echo "function test() {}" > test.js
git add test.js
commitlm generate --short-message

# Use specific provider/model for testing
commitlm generate --provider gemini --model gemini-2.0-flash-exp "your diff here"
```

### Advanced: YaRN Extended Context (Local Models)

For HuggingFace Qwen models, YaRN enables extended context lengths:

```bash
# Enable YaRN during initialization
commitlm init --provider huggingface --model qwen2.5-coder-1.5b --enable-yarn

# YaRN with memory optimization (64K context)
commitlm init --provider huggingface --model qwen2.5-coder-1.5b --enable-yarn --memory-optimization

# YaRN with full performance (131K context)
commitlm init --provider huggingface --model qwen2.5-coder-1.5b --enable-yarn --no-memory-optimization
```

**YaRN Benefits:**
- Extended context up to 131K tokens (vs 32K default)
- Better handling of large git diffs without truncation
- Automatic scaling based on memory optimization settings
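
For context, Qwen's model documentation describes enabling YaRN by adding a `rope_scaling` block to the model's `config.json`; presumably `--enable-yarn` applies something equivalent when loading the model (an assumption about CommitLM's internals):

```json
{
  "rope_scaling": {
    "type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768
  }
}
```

The numbers line up with the figures above: 32768 tokens of native context scaled by a factor of 4.0 gives 131072, i.e. the 131K token limit.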

## Commands

### Primary Commands

| Command | Description |
| --- | --- |
| `commitlm init` | Initializes the project with an interactive setup guide. |
| `commitlm install-hook` | Installs the Git hooks for automation. |
| `commitlm status` | Shows the current configuration and hardware status. |
| `commitlm validate` | Validates the configuration and tests the LLM connection. |

### Secondary Commands

| Command | Description |
| --- | --- |
| `commitlm generate` | Manually generate a commit message or documentation. |
| `commitlm uninstall-hook` | Removes the Git hooks. |
| `commitlm set-alias` | Sets up a Git alias for easier commit message generation. |
| `commitlm config get [KEY]` | Gets a configuration value. |
| `commitlm config set <KEY> <VALUE>` | Sets a configuration value. |
| `commitlm config change-model <TASK>` | Changes the model for a specific task. |
| `commitlm enable-task` | Enables or disables tasks. |

## Troubleshooting

### API Key Issues
```bash
# Verify environment variables are set
echo $GEMINI_API_KEY
echo $ANTHROPIC_API_KEY
echo $OPENAI_API_KEY

# Add to shell profile if missing
export GEMINI_API_KEY="your-key-here"
```

### Model Download Issues (Local Models)
Models are downloaded automatically on first use to `~/.cache/huggingface/`. Ensure you have an internet connection and sufficient disk space.

### Memory Errors (Local Models)
```bash
# Enable memory optimization (default)
commitlm init --provider huggingface --memory-optimization

# Try a smaller model
commitlm init --provider huggingface --model tinyllama

# Or switch to cloud API
commitlm init --provider gemini
```

### Performance Issues (Local Models)
```bash
# Check hardware detection
commitlm status

# Disable memory optimization for better quality
commitlm init --provider huggingface --no-memory-optimization

# Switch to cloud API for faster generation
commitlm config change-model default
# Select cloud provider (Gemini/Anthropic/OpenAI)
```

### Hook Not Working
```bash
# Verify hooks are installed
ls -la .git/hooks/

# Reinstall hooks
commitlm install-hook --force

# Check which tasks are enabled
commitlm config get commit_message_enabled
commitlm config get doc_generation_enabled

# Enable/disable tasks
commitlm enable-task
```

### CUDA/GPU Issues (Local Models)
```bash
# Check GPU detection
commitlm status

# Force CPU usage if GPU causes issues
# Edit .commitlm-config.json and set "device": "cpu"
```
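
Assuming the config file accepts a top-level `"device"` key as described, the relevant part of `.commitlm-config.json` would look like:

```json
{
  "provider": "huggingface",
  "model": "qwen2.5-coder-1.5b",
  "device": "cpu"
}
```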

### Git Hook Conflicts
If you have existing `prepare-commit-msg` or `post-commit` hooks:
```bash
# Backup existing hooks
cp .git/hooks/prepare-commit-msg .git/hooks/prepare-commit-msg.backup
cp .git/hooks/post-commit .git/hooks/post-commit.backup

# Install CommitLM hooks
commitlm install-hook

# Manually merge if needed by editing .git/hooks/prepare-commit-msg or .git/hooks/post-commit
```

### Configuration Not Found
```bash
# Ensure you're in a git repository
git status

# Reinitialize configuration
commitlm init
```

## Contributing

We welcome contributions! Here's how you can help:

### Reporting Issues
- Check [existing issues](https://github.com/LeeSinLiang/commitLM/issues) first
- Provide clear reproduction steps
- Include system info from `commitlm status`

### Feature Requests
- Open an issue with the `enhancement` label
- Describe the use case and expected behavior

### Development Setup
```bash
# Clone the repository
git clone https://github.com/LeeSinLiang/commitLM.git
cd commitLM

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install in editable mode with dev dependencies
pip install -e ".[dev]"

# Run tests
pytest

# Run linting
black commitlm/
ruff check commitlm/
```

### Pull Requests
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Make your changes
4. Add tests for new functionality
5. Ensure all tests pass (`pytest`)
6. Run linters (`black .` and `ruff check .`)
7. Commit your changes (use CommitLM for commit messages!)
8. Push to your fork
9. Open a Pull Request

## License

CommitLM is licensed under the **Apache License 2.0**. See [LICENSE](LICENSE) for full details and the [NOTICE](NOTICE) file for third-party attributions.

## Support

- **Issues**: [GitHub Issues](https://github.com/LeeSinLiang/commitLM/issues)
- **Discussions**: [GitHub Discussions](https://github.com/LeeSinLiang/commitLM/discussions)
- **PyPI**: [https://pypi.org/project/commitlm/](https://pypi.org/project/commitlm/)

---

*If CommitLM saves you time, consider giving it a ⭐ on GitHub!*

            

        "Homepage": "https://github.com/LeeSinLiang/commitLM",
        "Issues": "https://github.com/LeeSinLiang/commitLM/issues",
        "Repository": "https://github.com/LeeSinLiang/commitLM"
    },
    "split_keywords": [
        "git",
        " commit",
        " documentation",
        " ai",
        " llm",
        " automation",
        " developer-tools"
    ],
    "urls": [
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "53d9fdf8b4f8d6fa162daa4f731ca65a5b2a17ff3396f7b1d8a3b042b052e245",
                "md5": "1eba684c3ea686986d1b19ba5afb6a2b",
                "sha256": "9bee284b3012bca42baf6f24ed026e1bb592d13c71e3b4687aa6561976a7fe65"
            },
            "downloads": -1,
            "filename": "commitlm-1.0.8-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "1eba684c3ea686986d1b19ba5afb6a2b",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.9",
            "size": 45169,
            "upload_time": "2025-10-14T21:39:43",
            "upload_time_iso_8601": "2025-10-14T21:39:43.928877Z",
            "url": "https://files.pythonhosted.org/packages/53/d9/fdf8b4f8d6fa162daa4f731ca65a5b2a17ff3396f7b1d8a3b042b052e245/commitlm-1.0.8-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "d7ebb118bf9f29636db5ae4058972e36eeb990a05c163228cb5b5a3f16ad88f4",
                "md5": "39eb0178dcc50577a392c609fd11a0de",
                "sha256": "67b9669d47c56185d8e5f25148093a6832a03999545ad49575c53c00f12c22d5"
            },
            "downloads": -1,
            "filename": "commitlm-1.0.8.tar.gz",
            "has_sig": false,
            "md5_digest": "39eb0178dcc50577a392c609fd11a0de",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.9",
            "size": 49007,
            "upload_time": "2025-10-14T21:39:44",
            "upload_time_iso_8601": "2025-10-14T21:39:44.924823Z",
            "url": "https://files.pythonhosted.org/packages/d7/eb/b118bf9f29636db5ae4058972e36eeb990a05c163228cb5b5a3f16ad88f4/commitlm-1.0.8.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2025-10-14 21:39:44",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "LeeSinLiang",
    "github_project": "commitLM",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": false,
    "requirements": [
        {
            "name": "gitpython",
            "specs": []
        },
        {
            "name": "transformers",
            "specs": [
                [
                    ">=",
                    "4.35.0"
                ]
            ]
        },
        {
            "name": "torch",
            "specs": [
                [
                    ">=",
                    "2.0.0"
                ]
            ]
        },
        {
            "name": "accelerate",
            "specs": []
        },
        {
            "name": "optimum",
            "specs": []
        },
        {
            "name": "bitsandbytes",
            "specs": []
        },
        {
            "name": "click",
            "specs": [
                [
                    ">=",
                    "8.0"
                ]
            ]
        },
        {
            "name": "rich",
            "specs": []
        },
        {
            "name": "InquirerPy",
            "specs": []
        },
        {
            "name": "pydantic",
            "specs": [
                [
                    ">=",
                    "2.0"
                ]
            ]
        },
        {
            "name": "python-dotenv",
            "specs": []
        },
        {
            "name": "jinja2",
            "specs": []
        },
        {
            "name": "google-genai",
            "specs": []
        },
        {
            "name": "google-api-core",
            "specs": []
        },
        {
            "name": "anthropic",
            "specs": []
        },
        {
            "name": "openai",
            "specs": []
        },
        {
            "name": "pytest",
            "specs": []
        },
        {
            "name": "pytest-cov",
            "specs": []
        },
        {
            "name": "black",
            "specs": []
        },
        {
            "name": "ruff",
            "specs": []
        },
        {
            "name": "watchdog",
            "specs": []
        }
    ],
    "lcname": "commitlm"
}
        