# GitLlama 🦙
A git automation tool that uses AI to analyze repositories and make code changes. GitLlama clones a repository, analyzes the codebase, selects an appropriate branch, and makes iterative improvements.
## Core Design: 4 Query Types with Templates
GitLlama's AI decision-making is built on a comprehensive 4-query system with structured templates:
- **🔤 Multiple Choice**: Lettered answers (A, B, C, etc.) for deterministic decisions
- **📝 Single Word**: Single word responses perfect for variable storage and simple classifications
- **📰 Open Response**: Essay-style detailed responses for complex analysis and explanations
- **📄 File Write**: Complete file content generation with automatic formatting cleanup
Each query type uses carefully crafted templates with variable substitution, ensuring consistent, high-quality AI interactions while maintaining full Congressional oversight.
## Installation
```bash
pip install gitllama
```
## Prerequisites
GitLlama requires Ollama for AI features:
```bash
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
# Start Ollama server
ollama serve
# Pull a model
ollama pull gemma3:4b
```
## Usage
### Basic usage:
```bash
gitllama https://github.com/user/repo.git
```
### With custom model:
```bash
gitllama https://github.com/user/repo.git --model llama3:8b
```
### With specific branch:
```bash
gitllama https://github.com/user/repo.git --branch feature/my-improvement
```
### Verbose mode:
```bash
gitllama https://github.com/user/repo.git --verbose
```
## How It Works
### 1. Repository Analysis
GitLlama analyzes the repository using hierarchical summarization:
- Scans all text files and documentation
- Groups files into chunks that fit the AI's context window
- Analyzes each chunk independently
- Merges summaries hierarchically
- Produces structured insights about the project
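The summarization strategy above can be sketched in a few lines. This is an illustrative sketch, not GitLlama's actual implementation; the function names and the character-count budget are assumptions:

```python
# Hypothetical sketch of hierarchical summarization: chunk files to fit
# the context window, summarize each chunk, then merge pairwise.

def summarize_repo(files, summarize, context_limit):
    """files: list of (path, text); summarize: AI summarization callable."""
    # Group files into chunks that fit the context window
    chunks, current, size = [], [], 0
    for path, text in files:
        if size + len(text) > context_limit and current:
            chunks.append(current)
            current, size = [], 0
        current.append((path, text))
        size += len(text)
    if current:
        chunks.append(current)

    # Analyze each chunk independently
    summaries = [summarize("\n".join(t for _, t in c)) for c in chunks]

    # Merge summaries hierarchically (pairwise) until one remains
    while len(summaries) > 1:
        summaries = [
            summarize("\n".join(summaries[i:i + 2]))
            for i in range(0, len(summaries), 2)
        ]
    return summaries[0]
```

The pairwise merge keeps each summarization call within the context budget, which is what lets the analysis scale to repositories far larger than a single prompt.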
### 2. Branch Selection
The AI makes branch decisions using multiple choice queries:
- Analyzes existing branches
- Scores reuse potential
- Decides: REUSE or CREATE
- Selects branch type: feature, fix, docs, or chore
### 3. File Modification
Iterative development with validation:
- AI selects files to modify (multiple choice)
- Generates content (open response)
- Validates changes (multiple choice)
- Continues until satisfied
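The iterate-and-validate loop can be sketched as follows. This is a hypothetical outline, not GitLlama's actual code: the `ai` object is assumed to follow the 4-query interface described below, and `max_iterations` is an assumed safeguard against unbounded loops:

```python
# Hypothetical sketch of the modify-validate loop.

def improve_files(ai, candidate_files, max_iterations=5):
    for _ in range(max_iterations):
        # AI selects a file to modify, or signals completion (multiple choice)
        choice = ai.multiple_choice(
            question="Which file should be modified next?",
            options=candidate_files + ["DONE"],
        )
        if choice.value == "DONE":
            break
        # Generate new content for the chosen file
        content = ai.file_write(
            requirements=f"Improve {choice.value}",
        ).content
        # Validate the change before keeping it (multiple choice)
        verdict = ai.multiple_choice(
            question="Is this change acceptable?",
            options=["ACCEPT", "REJECT"],
            context=content,
        )
        if verdict.value == "ACCEPT":
            yield choice.value, content
```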
### 4. Commit and Push
- Generates commit message (open response)
- Commits changes
- Pushes to remote repository
## AI Query Interface
The 4-query system provides the right tool for every task:
```python
# Multiple choice for deterministic decisions with lettered answers
result = ai.multiple_choice(
question="Should we reuse an existing branch?",
options=["REUSE", "CREATE"],
context="Current branch: main"
)
# Returns: letter='A', index=0, value='REUSE'
# Single word for variable storage and classifications
result = ai.single_word(
question="What programming language is this?",
context="Repository analysis shows..."
)
# Returns: word='Python'
# Open response for detailed analysis and explanations
result = ai.open(
prompt="Explain the architecture benefits",
context="Codebase structure and requirements..."
)
# Returns: content='Detailed essay-style response...'
# File write for generating complete file content
result = ai.file_write(
requirements="Create a Python configuration file with database settings",
context="Application uses PostgreSQL and Redis..."
)
# Returns: content='# config.py\nDATABASE_URL = "postgres://..."'
```
## Automatic Context Compression
GitLlama includes intelligent context compression for codebases that exceed the model's context limits:
### How It Works
When the AI context window is too large, GitLlama automatically:
1. Detects when context exceeds 70% of model capacity (reserves 30% for prompt/response)
2. Splits context into chunks and compresses each using AI summarization
3. Extracts only information relevant to the current query
4. Performs multiple compression rounds if needed (up to 3 rounds)
5. Tracks compression metrics for performance monitoring
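The steps above can be sketched as a simple loop. The 70% budget and the helper names are assumptions for illustration, not GitLlama's actual implementation:

```python
# Hypothetical sketch of the compression loop: compress until the context
# fits the budget, falling back to truncation after max_rounds rounds.

def fit_context(context, query, compress, capacity, max_rounds=3):
    """compress(chunk, query) is an AI summarization callable that keeps
    only material relevant to the current query."""
    budget = int(capacity * 0.7)  # reserve 30% for prompt/response
    for _ in range(max_rounds):
        if len(context) <= budget:
            return context
        # Split into budget-sized chunks and compress each one
        chunks = [context[i:i + budget]
                  for i in range(0, len(context), budget)]
        context = "\n".join(compress(chunk, query) for chunk in chunks)
    # Fallback: gracefully degrade to plain truncation
    return context[:budget]
```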
### Features
- **Automatic Detection**: No configuration needed - compression triggers automatically
- **Query-Focused**: Compression preserves information relevant to the specific question
- **Multi-Round Compression**: Can perform up to 3 compression rounds for very large contexts
- **Metrics Tracking**: Records compression events, ratios, and success rates
- **Fallback Handling**: Gracefully degrades to truncation if compression fails
### Performance
- Typical compression ratios: 40-60% size reduction
- Minimal impact on response quality for focused queries
- Compression time: 2-5 seconds per round depending on context size
This lets GitLlama work with repositories larger than the model's context window without manual context management.
## Congressional Oversight System 🏛️
GitLlama includes a sophisticated Congressional voting system that provides governance and validation of AI decisions through three Representatives embodying fundamental aspects of humanity:
### The Three Representatives
Each Representative embodies a core aspect of human nature and decision-making:
- **Caspar the Rational**: Embodies logic, reason, and analytical thinking - values evidence, consistency, and systematic approaches
- **Melchior the Visionary**: Embodies creativity, innovation, and progress - values bold ideas, transformation, and breakthrough thinking
- **Balthasar the Compassionate**: Embodies wisdom, empathy, and moral judgment - values fairness, kindness, and human dignity
### How It Works
- **Values-Based Evaluation**: Representatives vote based on their core values and personality traits, regardless of topic expertise
- **Individual AI Models**: Each Representative can use different AI models optimized for their reasoning style
- **Templated Prompts**: Dynamic prompt generation based on each Representative's likes, dislikes, and personality
### Features
- **Automatic Evaluation**: All AI responses get Congressional review
- **Majority Voting**: Decisions require majority approval (2 out of 3 votes)
- **Detailed Reasoning**: Each Representative provides confidence scores and reasoning
- **Full Transparency**: All votes and reasoning included in HTML reports
- **Interactive Reports**: Hover over vote symbols to see detailed Representative feedback
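The majority rule is straightforward to illustrate. This toy sketch assumes a `Vote` shape for illustration; GitLlama's real Congress class may differ:

```python
# Toy illustration of majority voting among the three Representatives.
from dataclasses import dataclass

@dataclass
class Vote:
    representative: str
    approve: bool
    confidence: float   # 0.0 - 1.0
    reasoning: str

def congress_decision(votes):
    """A decision passes with majority approval (2 out of 3 votes)."""
    approvals = sum(v.approve for v in votes)
    return approvals >= (len(votes) // 2 + 1)
```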
### In Reports
Congressional votes appear inline with AI exchanges:
- 🏛️ Congressional icon shows voting occurred
- ✓/✗ symbols show individual Representative votes
- Hover tooltips reveal detailed reasoning and confidence scores
- Summary section shows overall voting patterns by Representative
This system ensures AI decisions undergo democratic review, adding a layer of validation and transparency to the automation process.
## Query Type System 🎯
GitLlama's enhanced 4-query system provides specialized tools for every AI interaction need:
### 🔤 Multiple Choice Query
- **Purpose**: Deterministic decisions requiring selection from predefined options
- **Returns**: Letter (A, B, C, etc.), index, and option value
- **Template**: Structured prompt with lettered options and clear instructions
- **Best for**: Branch decisions, operation types, validation checks
- **Example**: Choose deployment strategy, select file operation, pick testing approach
### 📝 Single Word Query
- **Purpose**: Variable storage and simple classifications requiring one-word answers
- **Returns**: Single cleaned word with confidence score
- **Template**: Focused prompt emphasizing single-word response requirement
- **Best for**: Language detection, status indicators, simple categorization
- **Example**: Programming language, file type, priority level
### 📰 Open Response Query
- **Purpose**: Detailed analysis, explanations, and complex reasoning tasks
- **Returns**: Comprehensive text content with proper formatting
- **Template**: Essay-style prompt encouraging detailed, structured responses
- **Best for**: Architecture explanations, code analysis, documentation generation
- **Example**: Explain design patterns, analyze code complexity, describe system benefits
### 📄 File Write Query
- **Purpose**: Complete file content generation ready for direct use
- **Returns**: Clean file content with automatic formatting and code block removal
- **Template**: File-focused prompt with clear content requirements
- **Best for**: Configuration files, code generation, documentation creation
- **Example**: Generate config.py, create test files, produce README content
### Template Features
- **Variable Substitution**: `{context}`, `{question}`, `{options}`, `{prompt}`, `{requirements}`
- **Consistent Formatting**: Standardized instruction patterns across all query types
- **Context Integration**: Smart context compression and variable tracking
- **Congressional Oversight**: All queries evaluated by three Representatives with detailed reasoning
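Variable substitution can be illustrated with a multiple choice template. The template text here is illustrative, not GitLlama's exact prompt:

```python
# Illustrative template with {context}/{question}/{options} substitution.

MULTIPLE_CHOICE_TEMPLATE = (
    "Context:\n{context}\n\n"
    "Question: {question}\n\n"
    "Options:\n{options}\n\n"
    "Answer with a single letter."
)

def render_multiple_choice(question, options, context=""):
    # Letter the options A, B, C, ... before substitution
    lettered = "\n".join(
        f"{chr(ord('A') + i)}. {opt}" for i, opt in enumerate(options)
    )
    return MULTIPLE_CHOICE_TEMPLATE.format(
        context=context, question=question, options=lettered
    )
```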
## Architecture
```
gitllama/
├── cli.py                     # Command-line interface
├── core/
│   ├── git_operations.py      # Git automation
│   └── coordinator.py         # AI workflow coordination
├── ai/
│   ├── client.py              # Ollama API client
│   ├── query.py               # 4-query interface (multiple_choice, single_word, open, file_write)
│   ├── congress.py            # Congressional voting system for AI validation
│   ├── context_compressor.py  # Automatic context compression
│   └── parser.py              # Response parsing and code extraction
├── analyzers/
│   ├── project.py             # Repository analysis
│   └── branch.py              # Branch selection logic
├── modifiers/
│   └── file.py                # File modification workflow
└── utils/
    ├── metrics.py             # Metrics collection and tracking
    ├── context_tracker.py     # Context and variable tracking for reports
    └── reports.py             # HTML report generation
```
### Key Components:
- **AIQuery**: 4-query interface (multiple_choice, single_word, open, file_write) with templated prompts and automatic compression
- **Congress**: Congressional voting system with three Representatives for AI validation across all query types
- **ContextCompressor**: Intelligent context compression for large codebases
- **ContextTracker**: Tracks all variables and prompt-response pairs for detailed reports
- **MetricsCollector**: Tracks AI calls, compressions, and performance metrics
- **ProjectAnalyzer**: Hierarchical analysis of repository structure
- **BranchAnalyzer**: Branch selection using multiple choice decisions with lettered answers
- **FileModifier**: File generation using dedicated file_write queries with automatic cleanup
- **ResponseParser**: Extracts clean results from all query types with appropriate parsing
## Reports
GitLlama generates HTML reports with:
- Timeline of AI decisions with color-coded variable highlighting across all 4 query types
- Congressional voting results with interactive tooltips for every query
- Query type breakdown (multiple_choice, single_word, open, file_write)
- Branch selection rationale using lettered multiple choice responses
- File generation details from dedicated file_write queries
- API usage statistics by query type
- Context window tracking and template usage
- Compression events and metrics
- Performance analytics across all query types
- Representative voting patterns and unanimity rates for each query type
Reports are saved to `gitllama_reports/` directory.
## Compatible Models
Works with any Ollama model:
- `gemma3:4b` - Fast and efficient (default)
- `llama3.2:1b` - Ultra-fast for simple tasks
- `codellama:7b` - Optimized for code
- `mistral:7b` - General purpose
- `gemma2:2b` - Very fast
## What Gets Analyzed
- Source code (Python, JavaScript, Java, Go, Rust, etc.)
- Configuration files (JSON, YAML, TOML)
- Documentation (Markdown, README)
- Build files (Dockerfile, package.json)
- Scripts (Shell, Batch)
## Performance
- Small repos (<100 files): ~30 seconds
- Medium repos (100-500 files): 1-2 minutes
- Large repos (500+ files): 2-5 minutes
## Development
```bash
git clone https://github.com/your-org/gitllama.git
cd gitllama
pip install -e ".[dev]"
# Run tests
pytest
```
## Troubleshooting
### Ollama not available?
```bash
# Check if Ollama is running
curl http://localhost:11434/api/tags
# Start Ollama
ollama serve
```
### Context window too small?
```bash
# Use a model with larger context
gitllama repo.git --model mistral:7b
```
### Analysis taking too long?
```bash
# Use a smaller model
gitllama repo.git --model llama3.2:1b
```
## License
GPL v3 - see LICENSE file
## Contributing
Contributions welcome! The modular architecture makes it easy to extend.
---
**Note**: GitLlama requires git credentials configured for pushing changes. Ensure you have appropriate repository access before use.